Deep neural networks (DNNs) have proven effective at solving many real-world problems, but their high computational cost prohibits deploying these models to edge devices. Pruning, as a method of introducing zeros into model weights, has been shown to be an effective way to provide a good trade-off between model accuracy and computational efficiency, and is a widely used method for generating compressed models. However, the granularity of pruning makes for an important trade-off: at the same sparsity level, a coarse-grained structured sparsity pattern is more efficient on conventional hardware but leads to worse accuracy, while a fine-grained unstructured sparsity pattern can achieve better accuracy but is inefficient on existing hardware. On the other hand, some modern processors are equipped with fast on-chip scratchpad memories and gather/scatter engines that perform indirect load and store operations on such memories. In this work, we propose a family of novel sparsity patterns, named gather-scatter (GS) patterns, to exploit scratchpad memories and gather/scatter engines to accelerate neural network inference. Correspondingly, we present a compact sparse format. The proposed sparsity patterns, together with a novel pruning method, address the load-imbalance problem and lead to models whose quality is close to that of unstructured sparse models and whose computational efficiency is close to that of structured sparse models. Our experiments show that GS patterns consistently make better trade-offs between accuracy and computational efficiency than conventional structured sparsity patterns. GS patterns can reduce the runtime of DNN components by a factor of two to three at the same accuracy level. This is confirmed on three different deep learning tasks and popular models: GNMT for machine translation, ResNet-50 for image recognition, and Jasper for acoustic speech recognition.
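To make the gather/scatter idea concrete, here is a minimal sketch of a sparse matrix-vector product over a compact CSR-like format, where an index array drives the indirect ("gather") loads of activations. The function name and the format details are illustrative assumptions, not the paper's exact GS format.

```python
def gs_spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product in a compact CSR-like format.

    values  : flat list of nonzero weights
    col_idx : for each nonzero, which input element to gather
    row_ptr : row r's nonzeros live in values[row_ptr[r]:row_ptr[r+1]]
    """
    y = [0.0] * (len(row_ptr) - 1)
    for r in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]  # indirect (gather) load
        y[r] = acc                            # write (scatter) the result
    return y

# Dense matrix [[2, 0, 0], [0, 0, 3]] applied to x = [1, 2, 3]
y = gs_spmv([2.0, 3.0], [0, 2], [0, 1, 2], [1.0, 2.0, 3.0])
```

On hardware with a gather/scatter engine, the inner indexed loads would map to hardware-accelerated indirect accesses against scratchpad memory rather than a software loop.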
translated by Google Translate
To date, live-cell imaging at the nanometer scale remains challenging. Even though super-resolution microscopy methods have enabled the visualization of subcellular structures below the optical diffraction limit, the spatial resolution is still far from enough for the structural reconstruction of biomolecules in vivo (i.e., the ~24 nm thickness of microtubule fibers). In this study, we propose an A-Net network and show that the resolution of cytoskeleton images captured by a confocal microscope can be significantly improved by combining the A-Net deep learning network with the DWDC algorithm, which is based on a degradation model. Utilizing the DWDC algorithm to construct new datasets and taking advantage of the A-Net neural network's features (i.e., its small number of layers), we successfully remove the noise and flocculent structures that originally interfered with the cellular structure in the raw image, and improve the spatial resolution by 10 times using a relatively small dataset. We therefore conclude that the proposed algorithm, which combines the A-Net neural network with the DWDC method, is a suitable and universal approach for recovering structural details of biomolecules, cells, and organs from low-resolution images.
Deep neural networks (DNNs) have been shown to deliver excellent performance in many real-life applications, but their heavy computational cost and storage requirements have prevented them from being deployed to many edge and Internet-of-Things (IoT) devices. Sparse deep neural networks, whose weight parameters are mostly zeros, can substantially reduce a model's computational complexity and memory consumption. In real-world usage scenarios, a device may suffer large fluctuations in its available computational and memory resources under different environments, and quality of service (QoS) is difficult to maintain due to long-tail inference latency. Facing these real-life challenges, we propose to train a sparse model that supports multiple sparsity levels. That is, the weights satisfy a hierarchical structure such that the locations of the nonzero parameters of a sparser sub-model are a subset of those of a less sparse sub-model. In this way, an appropriate sparsity level can be selected dynamically during inference, while the storage cost is bounded by that of the least sparse sub-model. We have verified our method on a variety of DNN models and tasks, including ResNet-50, PointNet++, GNMT, and graph attention networks. We obtain sparse sub-models with an average of 13.38% of the weights and 14.97% of the FLOPs, while their accuracy is as good as that of their dense counterparts. Sparser sub-models with 5.38% of the weights and 4.47% of the FLOPs, which are subsets of the less sparse ones, incur only a 3.25% relative accuracy loss.
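The nested-support idea above can be sketched with magnitude-based masks: ranking the weights once by magnitude makes every top-k support automatically a subset of every larger top-k support, so one stored weight set serves all sparsity levels. The function names and the magnitude criterion are illustrative assumptions, not necessarily the paper's training procedure.

```python
def nested_masks(weights, keep_fracs):
    """Return one 0/1 mask per sparsity level; each mask's support is
    nested inside the previous (less sparse) one, because all levels
    share a single magnitude ranking of the weights."""
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    masks = []
    for frac in keep_fracs:          # decreasing keep fractions
        keep = set(order[:int(len(weights) * frac)])
        masks.append([1 if i in keep else 0 for i in range(len(weights))])
    return masks

def apply_mask(weights, mask):
    """Materialize one sub-model by zeroing out the pruned weights."""
    return [w * m for w, m in zip(weights, mask)]

w = [0.9, -0.1, 0.5, 0.05, -0.7, 0.2]
m50, m25 = nested_masks(w, [0.5, 0.25])  # keep top-3, then top-1
```

At inference time, switching sparsity level is just a matter of selecting which mask to apply; no extra weight storage is needed beyond the least sparse sub-model.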
The min-max training principle that minimizes the maximum adversarial loss, also known as adversarial training (AT), has proven to be a state-of-the-art approach for improving adversarial robustness. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in the adversarial context. In this paper, we show how a general framework of min-max optimization over multiple domains can be leveraged to advance the design of different types of adversarial attacks. In particular, given a set of risk sources (domains), minimizing the worst-case attack loss can be reformulated as a min-max problem by introducing domain weights that are maximized over the probability simplex of the domain set. We showcase this unified framework on three attack generation problems: attacking model ensembles, devising universal perturbations over multiple inputs, and crafting attacks resilient to data transformations. Extensive experiments demonstrate that our approach leads to substantial robustness improvements over existing heuristic strategies, as well as over state-of-the-art defense methods that are trained to be robust against multiple perturbation types. Furthermore, we find that the self-adjusted domain weights learned from our min-max framework provide a holistic tool to explain the attack-level difficulty across domains. Code is available at https://github.com/wangjksjtu/minmaxsod.
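The inner maximization over the probability simplex can be illustrated with an exponentiated-gradient (mirror ascent) step: domains incurring larger attack loss receive exponentially larger weight, and renormalization keeps the weights on the simplex. This is a minimal sketch of that mechanism under assumed fixed per-domain losses; the paper's full algorithm alternates it with minimization over the perturbation.

```python
import math

def update_domain_weights(w, losses, eta=1.0):
    """One exponentiated-gradient ascent step on the probability simplex:
    w_i <- w_i * exp(eta * loss_i), then renormalize to sum to 1."""
    w = [wi * math.exp(eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Toy example: three risk domains with fixed per-domain attack losses.
w = [1.0 / 3] * 3
for _ in range(20):
    w = update_domain_weights(w, [0.2, 0.9, 0.5])
# The weight mass concentrates on the worst-case (highest-loss) domain.
```

The learned weights themselves are interpretable, as the abstract notes: a domain that keeps attracting weight is one where the attack loss is hardest to suppress.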
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments that objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class-imbalance and key-frame sequence-sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In such scenes, low image resolution and noise interference are the new challenges faced by surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality-variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
Interviews have been regarded as one of the most crucial steps in recruitment. To fully prepare for the interview with recruiters, job seekers usually practice with mock interviews between each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
Nowadays, time-stamped web documents related to general news queries flood the Internet, and timeline summarization targets concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the event-level attention features during the generation process, with the sequential information retained, and use them to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there is no general and flexible method to fuse the information represented by those models. Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical, semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Lithium Iron Phosphate (LFP)/graphite batteries.
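The ingredients of the PINN loss above can be sketched with two toy pieces: a finite-difference physics residual for a stand-in degradation ODE (the paper's actual semi-empirical PDE is not given here), and a Kendall-style uncertainty weighting of the learning tasks, which is one common instance of uncertainty-based adaptive weighting. All names and the ODE dq/dt = -k*q are illustrative assumptions.

```python
import math

def pde_residual(q, dt, k):
    """Forward-difference residual of a toy degradation ODE dq/dt = -k*q;
    a perfect model/trajectory pair drives this residual to zero."""
    return [(q[i + 1] - q[i]) / dt + k * q[i] for i in range(len(q) - 1)]

def uncertainty_weighted_loss(losses, log_vars):
    """Uncertainty-based multi-task weighting:
    L = sum_i exp(-s_i) * L_i + s_i, with learnable log-variances s_i."""
    return sum(math.exp(-s) * l + s for l, s in zip(losses, log_vars))

# Capacity trajectory generated by the matching explicit Euler scheme,
# so the physics residual is (numerically) zero.
k, dt = 0.05, 1.0
q = [1.0]
for _ in range(5):
    q.append(q[-1] * (1.0 - k * dt))

physics_loss = sum(r * r for r in pde_residual(q, dt, k))
data_loss = 0.04  # placeholder data-fit loss
total = uncertainty_weighted_loss([data_loss, physics_loss], [0.0, 0.0])
```

In a full PINN, the residual would be evaluated with automatic differentiation on the network's output, and the log-variances would be trained jointly with the network so that the data-fit and physics terms balance adaptively.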